Speeding up host TCP metric collection

11 November 2015

We currently use Sensu to monitor our environment, and I’ve taken to using standalone checks to collect various metrics. Standalone checks don’t rely on the server to issue a check request, which gives a more reliable interval between checks. One of the metrics we collect is the number of TCP sockets in each of the possible states on each server. We started off using the metrics-netstat-tcp.rb check from the excellent set of Sensu community plugins.

This plugin was doing the job quite nicely, until I noticed that some of our machines had widely varying intervals between publishing this data, especially when under load. This became noticeable once a server passed roughly 10k connections, and got worse as the number of connections increased. Given that it’s not uncommon for some of our servers to handle in the region of 100k connections during busy times, I decided to have a closer look at what was going on. Inspection of one of the servers revealed that the script was pegging a CPU core at 100% and still taking around 10s to complete when the machine had ~60k TCP connections in various states – not a good use of valuable resources.

Taking a look at the code of this plugin for the first time, everything looked pretty reasonable and nicely readable, but the large regular expression run against every line of /proc/net/tcp looked glaringly suspicious. As my skills with awk are greater than my skills with Ruby, I decided it would be quicker for me to simply rewrite the check using a tool that was built for running efficiently over large text files. The result a few minutes later was metrics-netstat-tcp.awk. Although the parameters are not the same, the output and functionality match, making it an almost (but not quite) drop-in replacement.
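
The counting involved is tiny: /proc/net/tcp has one line per socket, and the fourth column is the connection state as a hex code. As a purely illustrative sketch of the equivalent logic (not the actual plugin, which does this in awk with a plain field split rather than a per-line regex), the following Python does the same counting:

#!/usr/bin/env python
# Illustrative sketch only: count sockets per TCP state by reading /proc/net/tcp.
# (/proc/net/tcp6 would need the same treatment for IPv6 sockets.)
from collections import Counter

# Kernel TCP state codes as they appear in the "st" column of /proc/net/tcp
TCP_STATES = {
    '01': 'ESTABLISHED', '02': 'SYN_SENT', '03': 'SYN_RECV', '04': 'FIN_WAIT1',
    '05': 'FIN_WAIT2', '06': 'TIME_WAIT', '07': 'CLOSE', '08': 'CLOSE_WAIT',
    '09': 'LAST_ACK', '0A': 'LISTEN', '0B': 'CLOSING',
}

counts = Counter()
with open('/proc/net/tcp') as f:
    next(f)                        # skip the header line
    for line in f:
        state = line.split()[3]    # fourth whitespace-separated column is the state in hex
        counts[TCP_STATES.get(state, state)] += 1

for name in sorted(counts):
    print("%s %d" % (name, counts[name]))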

More importantly for me, collecting the metrics on a machine with ~60k connections now completes in under 60ms instead of around 10s. Hopefully the lesson for everyone else is that the older tools are still around for a reason, and that you need to know when and how to pick the right tool for the right job.

Categories: DevOps, Linux, Monitoring, Networks

ZFS on Linux ‘insufficient replicas’ panic

17 December 2014

I run a lovely little HP N54L MicroServer at home to keep all my important bits. It’s been a faithful companion for many years across two continents. I’m running Ubuntu LTS on it, booting off a small SSD but keeping years’ worth of backups across two ZFS mirrors.

I discovered this evening that the little PCIe card I was using for my boot drive had failed. There’s a spare SATA port on the motherboard I never bothered using (it’s only SATA II, the SSD is SATA III), so I just pulled out the old card and booted off the onboard controller. Imagine the horror when I got the following response to my zpool status after the first boot:

root@dumpy:~# zpool status
  pool: first
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
	or invalid.  There are insufficient replicas for the pool to continue
	functioning.
action: Destroy and re-create the pool from
	a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	first       UNAVAIL      0     0     0  insufficient replicas
	  mirror-0  UNAVAIL      0     0     0  insufficient replicas
	    sda     UNAVAIL      0     0     0
	    sdb     FAULTED      0     0     0  corrupted data

  pool: second
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
	or invalid.  There are insufficient replicas for the pool to continue
	functioning.
action: Destroy and re-create the pool from
	a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	second      UNAVAIL      0     0     0  insufficient replicas
	  mirror-0  UNAVAIL      0     0     0  insufficient replicas
	    sdc     FAULTED      0     0     0  corrupted data
	    sdd     FAULTED      0     0     0  corrupted data

The whole point of having two separate mirrors was that it should take something far more serious than the failure of an unrelated controller to corrupt them!

After taking a deep breath I had a look at the data again, and at the rest of my system. /dev/sda was now my boot SSD, but ZFS thought it was part of an array. It looked like using the onboard port had shuffled the drive names around. ZFS caches the pool configuration, including device paths, in /etc/zfs/zpool.cache to speed up importing the pools on boot, and moving the drives around had invalidated that information.

So, I did the following:

  • rm /etc/zfs/zpool.cache
  • Rebooted the machine (unloading the ZFS modules should also theoretically work)
  • zpool import <my pools>

And all my bits were back in the correct order!

root@dumpy:~# zpool status
  pool: first
 state: ONLINE
  scan: scrub repaired 0 in 3h27m with 0 errors on Sun Dec 14 03:27:14 2014
config:

	NAME                                          STATE     READ WRITE CKSUM
	first                                         ONLINE       0     0     0
	  mirror-0                                    ONLINE       0     0     0
	    ata-WDC_WD20EARX-00PASB0_WD-WMAZA6447754  ONLINE       0     0     0
	    ata-WDC_WD20EARX-00PASB0_WD-WMAZA6448154  ONLINE       0     0     0

errors: No known data errors

  pool: second
 state: ONLINE
  scan: scrub repaired 0 in 9h42m with 0 errors on Sun Dec 14 09:42:32 2014
config:

	NAME                                          STATE     READ WRITE CKSUM
	second                                        ONLINE       0     0     0
	  mirror-0                                    ONLINE       0     0     0
	    ata-WDC_WD20EARS-00J2GB0_WD-WCAYY0231617  ONLINE       0     0     0
	    ata-WDC_WD20EARS-00J2GB0_WD-WCAYY0221030  ONLINE       0     0     0

errors: No known data errors

I initially created the system with an early 0.6.0 release candidate of ZFS on Linux, which is why it was doing something as silly as identifying drives by /dev/sd? in the first place. Now that I’m running the 0.6.3 release, I’m happy to see it using drive serial numbers instead.

Hopefully this information will save someone from blowing away a valid mirror and having to restore from backups…

Categories: Linux

High Resolution Timestamps with Logstash and rsyslog

It’s been way too long since I’ve posted anything here, so I’ve decided to resume with what I’ve been busy with lately.

We’ve been using Logstash for quite a while now, and one of the annoyances I’ve had is that with the default configuration we threw together we only got second resolution on our timestamps. 99% of the time this has been good enough for the simple things we’ve been trying to keep an eye on, but now and again it has resulted in trouble stitching together the exact sequence of events when trying to diagnose a problem.

Rsyslog is the primary tool we use to get data into Logstash in the first place. As we don’t have the luxury of using anything cloud-oriented we need to care about our hardware too, and using syslog makes this very easy. Add to that how simple it is to get most applications logging via syslog, and using it is a no-brainer.

Getting high-resolution timestamps into your rsyslog messages is actually really simple. Just make sure you have the following option set in your rsyslog configuration:

$ActionForwardDefaultTemplate RSYSLOG_ForwardFormat
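
That template forwards messages with RFC 3339 timestamps that include fractional seconds. The other half of the job is making sure Logstash actually parses them; as a minimal sketch (assuming the syslog timestamp has already been extracted into a field called timestamp), a date filter along these lines keeps the sub-second precision on @timestamp:

filter {
  date {
    # ISO8601 covers the high-precision RFC 3339 timestamps rsyslog forwards
    match => [ "timestamp", "ISO8601" ]
  }
}
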
Categories: DevOps, Monitoring, Unix

The story of Ecks

13 September 2011

I’ve just released Ecks into the wild, a Python library for accessing SNMP data from a server without having to deal with the pain of knowing what a MIB or OID is. SNMP stands for Simple Network Management Protocol, but for most people it is anything but simple. It’s pretty straightforward once you understand what’s going on, but most people are daunted by the learning curve.

What results from this resistance is that when your average developer decides they want to monitor CPU usage or disk space on a machine, they end up doing it in the most obtrusive way possible – SSH. While I’m a big fan of small shell scripts, this is one place they do not belong. Let me give you an example:

I set up a new server here in London for one of our Chicago teams. Being a conscientious team, the first thing they did was wire in some monitoring they had written for their servers. It checks things like disk space, memory usage, CPU load and the state of various processes that they care about. They need pretty fine-grained checking intervals, so they check these every minute. The easiest way they know how to do this, though, is to SSH into their machines, run df, free, netstat, etc. and scrape the output. Every minute. Which on this nice shiny server consumed almost 20% of the CPU right off the bat. Educating them on the use of SSH ControlMaster helped, but it’s still doing a lot of work on the machine.

This was the last straw that led to the creation of Ecks. People will always follow the path of least resistance, so if you want people to do the right thing, you need to make it the easiest thing to do. SNMP has all of this information available, and modern snmpd implementations are stable, have a tiny footprint and are more secure than providing SSH access to your machine.
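
The idea is that a single call replaces the whole SSH-and-scrape dance. As a purely illustrative sketch (the host, community string and plugin names here are assumptions for illustration – check the Ecks README for the actual API), using it looks something like this:

# Illustrative only: names and arguments below are assumptions, not necessarily the real Ecks API.
from ecks import Ecks

e = Ecks()
# Query the remote snmpd (community string "public") instead of SSHing in
# and scraping the output of df, free and friends.
disk = e.get_data('192.168.0.10', 'public', 'disk')
memory = e.get_data('192.168.0.10', 'public', 'memory')
cpu = e.get_data('192.168.0.10', 'public', 'cpu')
print(disk, memory, cpu)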

The hardest part of all, though, was what to name this little library. When discussing the problem with Julian Simpson (the @builddoctor), he pointed out that MIB always reminded him of the Men in Black. Reading the Wikipedia article on the original comic book series turned up some interesting snippets:

The Men in Black are a secret organization that monitors and suppresses paranormal activity on Earth…

Replace “Earth” with “a computer” and you’re starting to get somewhere. Then I noticed this gem:

 An agent named Ecks went rogue after learning the truth behind the MiB: they seek to manipulate and reshape the world in their own image by keeping the supernatural hidden.

Many people think that the complexity of the MIB keeps SNMP data hidden. And so the name was chosen…

Categories: DevOps, Networks, Software

£106.50 per Terabyte Storage Server

With the price of storage dropping all the time, there is a constant perception among people who don’t deal with it every day – developers especially – that “disk space is cheap”. The problem is that so-called “Enterprise” storage costs are still astronomical compared to what people are used to paying for home storage – even when using SATA disks.

A lot of this extra cost comes from a perceived requirement for the highest available capacity, availability and performance. Achieving all three is expensive, but if you’re willing to sacrifice one of them then costs start to fall considerably. Lowering your requirements on two of the three drops them even further.

One of the teams I work with has a requirement that is primarily about capacity. Performance and availability are nice, but capacity is the key. We generate gigabytes’ worth of log files every day, but didn’t have one place to store them all for easy analysis. Just before I joined the team they’d purchased the cheapest “Enterprise” storage system the IT team at the time would allow – it ended up costing in the region of £12k for 12TB of raw storage. That’s £1000 per TB!

In addition to the price, the other problems were accessibility, management of the data, and managing growth. This inspired a hunt for something that would provide a cheaper and more flexible solution.

Our requirements were:

  • *nix-based system. The current storage solution was based on Windows Storage Server, but all our systems and tools for this team are Linux-based. Yes, Windows does technically provide things like an NFS server, but fighting with the filesystem permissions and the overall performance are two things that hurt us.
  • Cheap to expand. We need to have a clear path to grow the storage in the server easily by simply adding more disks.
  • Large filesystems. There’s nothing more wasteful from a storage point of view than having lots of small filesystems. Besides the management overhead, there are also many wasted blocks lying around unused.
  • Cheap to build. This inevitably means commodity hardware.
  • Reasonable availability. We don’t need 99.999% uptime, but would be happy with somewhere in the region of 90%+
  • Reasonable performance. Primary access to the data on this machine is via gigabit Ethernet. As long as it can keep up with the network card we’re happy…

Read more…

Categories: Linux, Solaris, Stuff, Unix

DevOps: State of the Nation

4 December 2010

When I got back from DevOpsDays in Hamburg this year I felt the need to explain my journey to the “DevOps” world and my view on where it’s headed. I started writing a State of the Nation paper to lay it all out. A couple of weeks in I got a message from Matthias Marschall asking if I’d like to do a guest post as part of their DevOps series. I agreed, and after a lot of effort (and help from a couple of great editors) you can now read it here.

It’s by far the longest article I’ve ever written, and I was amazed at how ideas that had been floating around in my head for a while crystallised through the process of writing them down. I found I got so passionate talking to people about what was in it that I’ve decided to make a talk out of it, the first iteration of which will be in Chicago on Tuesday (see previous post).

Speaking at Chicago DevOps Meetup

29 November 2010

I’ll be in Chicago next week and I’ll be speaking at the local DevOps group meetup on the 7th of December. Places are limited, so please sign up here if you’d like to attend.

Categories: DevOps

Vote for XFD in the Ultimate Wallboard Challenge

23 November 2010

Julian Simpson, aka The Build Doctor, has been working away at a nice web-based Build Status Monitor called XFD for a while now. One of my complaints for years has been that there’s no nice build status tool that’s easy to use, but I think he’s on to something.

It’s entered in the Ultimate Wallboard Challenge, and you can vote for it here.

Discussion on DevOps available at DevOpsCafe

18 November 2010

The week before DevOpsDays in Hamburg this year, the DevOpsCafe guys got a group of us together to talk DevOps in London. This is now available online here.

Categories: DevOps

Have we ESCaped Continuous Delivery?

7 October 2010

A few years ago Martin Fowler introduced me and a few other ThoughtWorkers who were involved in the Continuous Integration and Deployment space to an editor he knew and told us we needed to write a book about what we were doing. The key thing we were focusing on was making sure that quality software could be released to production in a reliable and repeatable way. Software has absolutely no value until it’s running in production. If you can’t get it into production quickly and easily then you’re just wasting time and money.

There were a few false starts and a few changes of crew, but eventually Jez Humble and Dave Farley stuck it out all the way to the end and Continuous Delivery was published this year. No matter what you do in your organisation, if you’re filled with dread at the thought of a new software release then you need to buy it, read it and do what it says. Now. This article will still be here after you’ve ordered it. Although I didn’t have enough time to be a big contributor, Jez would drag me in front of a whiteboard whenever our paths crossed to discuss things, and kept sending me drafts of key chapters I had a specific interest in, so I am proud to have helped in at least some small way.

One of the things he did do was include a mention of a project that Tom Sulston and I started to make configuration management easier. It’s called ESCape, and I’ve written and talked about it a few times in a few different places. As more and more people read the book, though, I’m getting more questions about the status of the project and our plans going forward.

At the moment the project is in hibernation. We’ve not made any changes for over a year now, and I don’t think I’ll be working on it in its current incarnation any time soon. That does not mean it’s not based on a good idea though! It’s more a problem of implementation.

At its heart, ESCape is supposed to be a simple way to manage a hierarchical key/value store. I like the way the UI works; Dan North even has a wonderful acronym for it that I can’t for the life of me remember right now. The real problem with the current design is how we’re storing the data. Trying to wedge that kind of data into a relational database always felt dirty, and I decided to stop before any real damage was done.

What I’d like to do, though, is take the existing UI and functionality and use something like Neo4J or CouchDB to store the data. Conversations I’d had with Jim Webber and Ian Robinson about it were one of the reasons I didn’t start immediately on a replacement, as at the time Jim was planning to write the REST interface for Neo4J. As an early release of that is now available, I guess I’ve run out of excuses…